
Refactor of Refresh & Recovery procedures #158

Merged
cygnusv merged 17 commits into nucypher:main on Sep 15, 2023

Conversation

Member

@cygnusv cygnusv commented Aug 31, 2023

Type of PR:

  • Bugfix
  • Feature
  • Documentation
  • Refactor
  • Other

Required reviews:

  • 1
  • 2
  • 3

What this does:
This PR is mainly a refactor of code related to refresh & recovery procedures. The main contributions are:

  • Relocation of refresh & recovery code from the tpke crate to the ferveo crate
  • Fixing incorrect refresh & recovery tests (see e.g., 8ef9e7f)
  • General improvements in tests

Issues fixed/closed:

Why it's needed:
This work is the basis for implementing the recovery & refresh protocols (see #163).

Notes for reviewers:
No breaking changes!

Work in this PR has surfaced these issues:

@piotr-roslaniec

@cygnusv There is a build error on CI, unrelated to this PR. Please see: #159

cygnusv and others added 9 commits September 11, 2023 17:50
Technically, this helper function creates a random polynomial where the `root` parameter is a root of the polynomial, so `make_random_polynomial_with_root` is a better name.
There was no function for the refresh case, and since it's very similar to the recovery case, we define here a common method for both (`prepare_share_updates_with_root`)
The concept of "threshold" doesn't belong to the abstraction level of polynomials, which actually deal with degrees. For the record, it's always the case that `threshold == degree + 1`.
Test using secret reconstruction directly, rather than via ciphertext decryption.
These functions assumed access to an update polynomial, but we're actually using several distributed polynomials (one for each participant) and we only have access to their evaluated points.

Note that the commits before did the preparation to remove this now.
Updates for shares were not applied correctly (see diff in L418-423). For the moment, we're recovering the same domain point since it facilitates debugging, but in the next commit I'll restore the test so there's a new validator with a random domain point.
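As a sketch of the contract described in the commit messages above — a random polynomial that vanishes at a prescribed `root`, with `coeffs.len() == degree + 1` (i.e., `threshold` coefficients) — the following toy Rust is illustrative only: the modulus `P`, the toy LCG, and the function body are stand-ins, not the actual arkworks-based `make_random_polynomial_with_root`.

```rust
// Toy prime modulus; ferveo works over a pairing-friendly curve's scalar field.
const P: u128 = 1_000_000_007;

/// Evaluate a polynomial (coefficients in ascending order) at x, mod P (Horner's rule).
fn eval(coeffs: &[u128], x: u128) -> u128 {
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % P)
}

/// Build a degree-`degree` polynomial with `root` as a root:
/// pick random coefficients for all terms of degree >= 1, then fix the
/// constant term so that p(root) == 0. Note threshold == degree + 1.
fn make_random_polynomial_with_root(degree: usize, root: u128, seed: &mut u64) -> Vec<u128> {
    // Toy LCG stands in for a real RNG.
    let mut lcg = || {
        *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (*seed >> 16) as u128 % P
    };
    let mut coeffs: Vec<u128> = (0..=degree).map(|_| lcg()).collect();
    // Adjust c0 so that p(root) == 0: c0 = -(sum_{i>=1} c_i * root^i) mod P.
    coeffs[0] = 0;
    let rest = eval(&coeffs, root);
    coeffs[0] = (P - rest) % P;
    coeffs
}

fn main() {
    let mut seed = 42u64;
    let root = 123456u128;
    let poly = make_random_polynomial_with_root(3, root, &mut seed);
    assert_eq!(eval(&poly, root), 0);   // root really is a root
    assert_eq!(poly.len(), 3 + 1);      // threshold == degree + 1
}
```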
@codecov-commenter

Codecov Report

Merging #158 (8407f78) into main (0a73aa3) will increase coverage by 0.06%.
The diff coverage is 99.77%.

@@            Coverage Diff             @@
##             main     #158      +/-   ##
==========================================
+ Coverage   77.87%   77.94%   +0.06%     
==========================================
  Files          23       23              
  Lines        5031     5051      +20     
==========================================
+ Hits         3918     3937      +19     
- Misses       1113     1114       +1     
Files Changed          Coverage           Δ
ferveo/src/lib.rs      97.57% <99.18%>    (+0.12%) ⬆️
ferveo/src/dkg.rs      96.11% <100.00%>   (ø)
ferveo/src/pvss.rs     91.16% <100.00%>   (-0.54%) ⬇️
ferveo/src/refresh.rs  100.00% <100.00%>  (ø)
tpke/src/lib.rs        99.78% <100.00%>   (-0.08%) ⬇️

* Removed benchmark for random polynomial with root (not much value as it's essentially benchmarking arkworks)
* Commented out recovery & refresh benchmarks until #162 is solved
@cygnusv changed the title from "[WIP] Revision of Refresh & Recovery procedures" to "Refactor of Refresh & Recovery procedures" on Sep 14, 2023
@cygnusv marked this pull request as ready for review September 14, 2023 06:12
@cygnusv added the enhancement (New feature or request) label Sep 14, 2023
@@ -419,28 +430,37 @@ mod test_dkg_full {
.decryption_key;

// Creates updated private key shares
// TODO: Why not using dkg.aggregate()?
@piotr-roslaniec Sep 14, 2023

IIRC dkg.aggregate would throw if we didn't feed enough transcripts into it.

dkg has an internal state machine that tracks the progress of the DKG ritual, and we would have to strictly adhere to its protocol in order to successfully aggregate here. Since refreshing/recovery is not implemented in the said state machine, it's easier to skip different DKG states and just call aggregate directly here.
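To illustrate the point about the state machine (all names below are hypothetical sketches, not the ferveo API): an `aggregate` that validates protocol state first would error out before enough transcripts have been applied, which is why the test bypasses it and aggregates directly.

```rust
// Hypothetical stand-in for the DKG object; transcripts are just u64s here.
struct Dkg {
    security_threshold: usize,
    transcripts: Vec<u64>,
}

impl Dkg {
    fn apply_transcript(&mut self, t: u64) {
        self.transcripts.push(t);
    }

    // Checks protocol progress before doing any work: with too few
    // transcripts it returns an error instead of a partial aggregate.
    fn aggregate(&self) -> Result<u64, &'static str> {
        if self.transcripts.len() < self.security_threshold {
            return Err("not enough transcripts to aggregate");
        }
        Ok(self.transcripts.iter().sum())
    }
}

fn main() {
    let mut dkg = Dkg { security_threshold: 3, transcripts: vec![] };
    dkg.apply_transcript(1);
    assert!(dkg.aggregate().is_err()); // state machine rejects early aggregation
    dkg.apply_transcript(2);
    dkg.apply_transcript(3);
    assert_eq!(dkg.aggregate(), Ok(6));
}
```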

Comment on lines +539 to +544
let deltas_i = prepare_share_updates_for_refresh::<E>(
&domain_points,
&dkg.pvss_params.h.into_affine(),
dkg.dkg_params.security_threshold as usize,
rng,
);


Just a comment on the subject of the future refactor: We can see how this could be rewritten as let deltas_i = dkg.prepare_share_updates_for_refresh(rng);
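The suggested method form could look roughly like the sketch below; the `Dkg` struct, its fields, and the placeholder body are hypothetical. The point is only that the free function's parameters already live on the DKG object, so a thin wrapper method removes the parameter-threading at call sites.

```rust
// Hypothetical DKG object holding what the free function currently takes as args.
struct Dkg {
    security_threshold: usize,
    domain_points: Vec<u64>,
}

// Free function as it exists today (placeholder body for illustration).
fn prepare_share_updates_for_refresh(domain_points: &[u64], threshold: usize) -> Vec<u64> {
    domain_points.iter().take(threshold).map(|p| p + 1).collect()
}

impl Dkg {
    // The wrapper forwards its own fields, so callers can write
    // `dkg.prepare_share_updates_for_refresh()` instead of threading params.
    fn prepare_share_updates_for_refresh(&self) -> Vec<u64> {
        prepare_share_updates_for_refresh(&self.domain_points, self.security_threshold)
    }
}

fn main() {
    let dkg = Dkg { security_threshold: 2, domain_points: vec![10, 20, 30] };
    assert_eq!(dkg.prepare_share_updates_for_refresh(), vec![11, 21]);
}
```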

Member Author

Absolutely

Comment on lines 586 to 593
DecryptionShareSimple::create(
&validator_keypair.decryption_key,
&updated_shares.get(share_index).unwrap(),
&ciphertext.header().unwrap(),
aad,
&dkg.pvss_params.g_inv(),
)
.unwrap()


Not specific to this PR but to the future refactor:
Perhaps we could avoid this section by creating a DecryptionShareSimple in pvss_aggregated.update_private_key_share_for_recovery? I think the only reason for moving updated_shares around is to eventually create a DecryptionShareSimple, so we could collapse this logic into one step. Naming TBD.

Member Author

Not really, updating private shares and creating decryption shares are absolutely independent. We create decryption shares here since it's an acceptance test.


You're right, this is completely the other way around - The nucypher usage of ferveo API confirms that it happens at two different time steps. Ursulas would never update their shares & create a decryption share at the same time, these are two completely different user flows.

use rand_core::RngCore;
use tpke::{lagrange_basis_at, PrivateKeyShare};

// SHARE UPDATE FUNCTIONS:
@piotr-roslaniec Sep 14, 2023

We could always put those functions into a module if we think it would help organize them better:

pub mod share_update {
	pub fn prepare_share_updates_for_recover ...

	pub fn apply_updates_to_private_share ...
}
// etc.

But on second thought, these sections will be different by the time we're done with the refactoring.

Anyway, just throwing it out there - It's an option.

Member Author

oh that's a nice idea, I didn't know that. Let's consider that for the next refactor.

Comment on lines +114 to +115
debug_assert!(poly.evaluate(root) == E::ScalarField::zero());
debug_assert!(poly.coeffs.len() == degree + 1);


Just a comment on this macro usage:
debug_assert! doesn't show up in a release target, i.e. this assertion would be removed in production. If we want this assertion to actually throw in production, we should use assert! instead. I didn't think it would be necessary, so I opted for the former.
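A minimal standalone illustration of that trade-off (hypothetical function, not from this PR): `debug_assert!` is compiled out when `debug_assertions` are disabled, as in release profiles by default, while `assert!` always runs.

```rust
fn check_poly_invariants(num_coeffs: usize, degree: usize) {
    // Development-time sanity check only; removed in release builds.
    debug_assert!(num_coeffs == degree + 1);
    // Always enforced, even in production binaries.
    assert!(degree > 0, "degree must be positive");
}

fn main() {
    check_poly_invariants(4, 3); // passes both checks
    if cfg!(debug_assertions) {
        println!("debug assertions are active in this build");
    }
}
```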

Comment on lines +372 to +382
// Finally, let's recreate the shared private key from the refreshed shares
let new_shared_private_key = recover_share_from_updated_private_shares(
&ScalarField::zero(),
&domain_points[..threshold],
&refreshed_shares[..threshold],
);

assert_eq!(
shared_private_key,
new_shared_private_key.private_key_share
);


We could test the red path here (not enough shares) similarly as we do in tdec_simple_variant_share_recovery_at_selected_point
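As a sketch of why the red path is worth testing: Lagrange interpolation at zero recovers the shared secret only from at least `threshold` shares; with fewer, it returns an unrelated value rather than failing loudly. The small field, helper names, and polynomial below are illustrative, not the ferveo/tpke implementation.

```rust
const P: i128 = 65537; // toy prime field

fn modpow(mut b: i128, mut e: i128) -> i128 {
    let mut r = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { r = r * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    r
}

/// Modular inverse via Fermat's little theorem (P is prime).
fn inv(a: i128) -> i128 { modpow(a, P - 2) }

/// Lagrange interpolation of the points (xs[i], ys[i]) evaluated at x = 0.
fn recover_at_zero(xs: &[i128], ys: &[i128]) -> i128 {
    let mut acc = 0;
    for i in 0..xs.len() {
        let mut li = 1;
        for j in 0..xs.len() {
            if i != j {
                // l_i(0) = prod_{j != i} (0 - x_j) / (x_i - x_j)
                let num = (P - xs[j]) % P;
                let den = ((xs[i] - xs[j]) % P + P) % P;
                li = li * num % P * inv(den) % P;
            }
        }
        acc = (acc + ys[i] * li) % P;
    }
    acc
}

fn main() {
    // Secret polynomial p(x) = 1234 + 56x + 78x^2, so threshold = 3.
    let p = |x: i128| (1234 + 56 * x + 78 * x * x) % P;
    let xs: Vec<i128> = vec![1, 2, 3];
    let ys: Vec<i128> = xs.iter().map(|&x| p(x)).collect();
    // Green path: threshold shares recover the secret p(0) = 1234.
    assert_eq!(recover_at_zero(&xs, &ys), 1234);
    // Red path: threshold - 1 shares yield some other value, not the secret.
    assert_ne!(recover_at_zero(&xs[..2], &ys[..2]), 1234);
}
```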

Comment on lines +247 to +248
&domain_points[..(threshold - 1)],
&new_share_fragments[..(threshold - 1)],


We could throw in an assert! here to make sure the domain_points and new_share_fragments have a proper length. It's not strictly necessary, but it relates to some of the original issues we had with those tests.

@piotr-roslaniec left a comment

LGTM 🎉

Just some comments, but nothing blocking.

Member
@derekpierre left a comment

🎸 - a bit intricate for me to provide a comprehensive review, but looks good.

@cygnusv cygnusv merged commit a979fd6 into nucypher:main Sep 15, 2023
9 checks passed
Development

Successfully merging this pull request may close these issues.

Devise and test a share refreshing scenario with multiple dealers
4 participants